Cyber Resiliency is an Emergent Property
Cyber resiliency is not a component you add; it emerges from a development process that builds security into the entire lifecycle of a system: design, development, and deployment. During design, a system must incorporate security principles and practices. During development, it must apply secure coding standards, security-enhancing technology, and vulnerability-reducing practices. In short, security must be built in, because it cannot be bolted on afterward.
A Note for Government Procurement: Security must be Baked into the Contracts
Hellebore’s core business is providing software development, project management, and acquisition management to U.S. Government customers. Through this experience, we have matured our approach to helping clients build software security into their procurement contracts. According to the GAO, the only way to ensure that security gets built in is to include specific cybersecurity requirements and testing criteria in contract language. Unfortunately, the skill gap between customer and provider and the fast-moving nature of the cybersecurity field make it difficult to know what to ask for, and harder still to define security acceptance criteria. Without making security a requirement, you won’t get secure software.
Where Projects Try, and Fall Short
There are six key areas that, by themselves, are not enough to secure a project:
- Developing a working design
- Hiring good developers
- Using mature libraries
- Testing functionality
- Using modern tools
- Deploying the software
Let’s address each of these key areas.
A Working Design isn’t Enough
There’s more than one way to solve every problem, but not all methods are equal: some solutions are inherently more secure than others. To ensure that a design is secure, we must perform threat modeling to identify its risks. Only then can we apply security principles such as “fail-safe,” “deny by default,” “zero trust,” “least privilege,” and “defense in depth” to mitigate those risks. Without a requirement for threat modeling and principle-based secure design, you won’t get secure software.
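As a concrete illustration, here is a minimal Python sketch of what “deny by default” and “least privilege” look like in code; the roles, actions, and policy table are hypothetical examples, not a real API:

```python
# Deny-by-default authorization sketch. Roles, actions, and the
# policy table are hypothetical.
ALLOWED_ACTIONS = {
    "analyst": {"read_report"},
    "admin": {"read_report", "delete_report"},
}

def is_authorized(role: str, action: str) -> bool:
    """Grant access only when an explicit rule allows it.

    Unknown roles, unknown actions, and missing policy entries
    all fall through to the same answer: deny.
    """
    return action in ALLOWED_ACTIONS.get(role, set())

assert is_authorized("analyst", "read_report")
assert not is_authorized("analyst", "delete_report")  # least privilege
assert not is_authorized("intruder", "read_report")   # deny by default
```

The design choice worth noting: every case the author did not anticipate resolves to denial, whereas an insecure design enumerates what to block and allows everything else.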
Good Developers aren’t Enough
Computer science curricula do not typically require even a single credit of software security or cybersecurity-specific instruction, so most software engineers working today have no formal security training. Just picking the “best” developers is no guarantee that software will be built securely. Secure coding standards exist for most major programming languages and for specific problem domains: SEI CERT, MITRE CWE, MISRA, NASA JPL, and more. Specifying an appropriate coding standard is an easy way to eliminate bad code and encourage good code. Without a requirement for secure coding practices, you won’t get secure software.
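To make that concrete, here is a minimal Python sketch of one defect class these standards address, SQL injection (CWE-89 in the MITRE catalog); the table and queries are hypothetical:

```python
# SQL injection (CWE-89). The table, data, and queries are
# illustrative only.
import sqlite3

def find_user_unsafe(conn, name):
    # FLAW: concatenating input lets a value like "x' OR '1'='1"
    # rewrite the query and return every row.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(conn, name):
    # FIX: a parameterized query keeps user data out of the SQL grammar.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(find_user_unsafe(conn, "x' OR '1'='1"))  # leaks all rows
print(find_user_safe(conn, "x' OR '1'='1"))    # returns nothing
```

A coding standard turns the difference between these two functions from a matter of individual judgment into a checkable requirement.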
Mature Libraries aren’t Enough
Attacks only get better over time, and today’s secure libraries could be tomorrow’s vulnerable ones. Modern software routinely uses dozens to hundreds of open-source and third-party software libraries to speed up development, add features, and integrate with other services. But when you include external code, you also include its weaknesses and vulnerabilities. To address this, you must know what’s in your software. Using software composition analysis (SCA) to produce a software bill of materials (SBOM) makes it possible to check every component for known Common Vulnerabilities and Exposures (CVEs), security advisories, configuration recommendations, and license terms. Armed with this information, you can begin to mitigate, reduce, or accept any inherited risk. Without a requirement for a software bill of materials, you won’t get secure software.
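As a sketch of putting an SBOM to work, the following Python snippet inventories the components recorded in a CycloneDX-format JSON file, producing the list you would cross-check against CVE databases and advisories; the filename sbom.json is a placeholder:

```python
# Inventory components from a CycloneDX JSON SBOM. Assumes an SCA
# tool has already written "sbom.json" (the filename is a placeholder).
import json

with open("sbom.json") as f:
    sbom = json.load(f)

for comp in sbom.get("components", []):
    name = comp.get("name", "?")
    version = comp.get("version", "?")
    purl = comp.get("purl", "")  # package URL used for CVE lookups
    print(f"{name}\t{version}\t{purl}")
```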
Functionality Testing isn’t Enough
Traditional software testing verifies the correctness of the solution, nothing more. A correct solution, however, may still contain exploitable flaws. Correctness is orthogonal to security: correct code is not necessarily secure code. Code is made secure by incorporating explicit security testing during all phases of development. Static application security testing (SAST) checks for flaws in source code. Dynamic application security testing (DAST) scans for flaws in running programs. Interactive application security testing (IAST) instruments a running application so an application security engineer can find defects in complex, critical, and challenging-to-test areas of the system. Runtime monitoring watches deployed programs, detects bad behavior, and reacts by triggering correction mechanisms. Without a requirement for security-specific testing, you won’t get secure software.
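As an example of what SAST catches before any test runs, consider this Python sketch of OS command injection (CWE-78); a scanner such as Bandit flags the unsafe form on sight. The gzip use case is hypothetical:

```python
# OS command injection (CWE-78). The gzip use case is illustrative.
import subprocess

def compress_unsafe(filename):
    # FLAW: shell=True lets input like "x; rm -rf ~" execute as
    # commands; SAST tools flag this pattern statically.
    subprocess.run("gzip " + filename, shell=True)

def compress_safe(filename):
    # FIX: pass arguments as a list so no shell parses the input,
    # and "--" stops the filename from being read as an option.
    subprocess.run(["gzip", "--", filename], check=True)
```

Both functions pass a functionality test on well-behaved filenames; only security testing reveals that one of them hands control to an attacker.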
Modern Tools aren’t Enough
Modern compilers include mature, well-tested exploit mitigation techniques such as stack smashing protection, data execution prevention (DEP), and address space layout randomization (ASLR). These techniques are freely available and can be added to software automatically with little to no impact on performance. Unfortunately, to preserve backward compatibility, these features are not always enabled by default, which means they often go unused. Without a requirement for security enhancement techniques, you won’t get secure software.
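Because these mitigations are opt-in, acceptance testing can verify they were actually enabled. As one narrow illustration, this Python sketch reads an ELF header to check whether an executable was built position-independent (ET_DYN), a prerequisite for ASLR to randomize its image base; the path a.out is a placeholder, and the check assumes a little-endian ELF executable rather than a shared library:

```python
# Check whether an ELF executable is position-independent (PIE).
# The path "a.out" is a placeholder; assumes a little-endian ELF
# file that is an executable, not a shared library.
import struct

with open("a.out", "rb") as f:
    header = f.read(18)  # 16-byte e_ident + 2-byte e_type

assert header[:4] == b"\x7fELF", "not an ELF binary"
(e_type,) = struct.unpack_from("<H", header, 16)
print("PIE-enabled:", e_type == 3)  # ET_DYN == 3, ET_EXEC == 2
```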
Secure Software Builds aren’t Enough
Even secure software can be deployed in an insecure manner: services are misconfigured, unneeded features remain enabled, and authentication credentials are weak or reused. This happens because the environment that software operates in is complex, and the configuration of that software is often equally complex. Arriving at a correct configuration is often an iterative process, and only by version-controlling the configuration files can you prevent regression and configuration drift. Authentication keys and cryptographic certificates hardcoded into files get leaked and forgotten. With a secrets management tool, you can automatically generate new, unique, and strong authentication secrets for each deployment without the risk of leaking them in your source code. Without a requirement for version-controlled configurations, secrets management, and automated deployment, you won’t get secure software.
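Here is a minimal Python sketch of the two habits that secrets management replaces hardcoding with: generating strong per-deployment secrets, and reading them from the environment at runtime. The variable name API_TOKEN is hypothetical:

```python
# Secrets stay out of source control: generate them fresh, inject
# them via the environment. The name API_TOKEN is a hypothetical
# example.
import os
import secrets

# Generate a new, unique, cryptographically strong secret per
# deployment (run by the deployment pipeline, not typed by hand).
fresh_token = secrets.token_urlsafe(32)

# At runtime, read the secret from the environment and fail safe
# if it is missing, rather than falling back to a default.
api_token = os.environ.get("API_TOKEN")
if api_token is None:
    raise RuntimeError("API_TOKEN not set; refusing to start")
```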
Summary
Getting it right once isn’t enough. Cybersecurity is asymmetric: we have to do everything right, but an attacker only needs to find one thing wrong. We also have to get everything right every time the system requires an update. The solution is to build security into every software construction phase using a robust, automated, and repeatable process that allows for fast development cycles and continuous improvement. This is DevSecOps. When security is built into the process, security is built into the product. Without a requirement for DevSecOps, you won’t get secure software.
Hellebore
2900 Presidential Drive, Suite 155
Beavercreek, OH 45324
(833) 694-8496
info@hellebore.com